Results 1 - 12 of 12
1.
Nat Commun ; 15(1): 2681, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38538600

ABSTRACT

Ovarian cancer, a group of heterogeneous diseases, presents with diverse characteristics and has the highest mortality among gynecological malignancies. Accurate and early diagnosis of ovarian cancer is therefore of great significance. Here, we present OvcaFinder, an interpretable model constructed from ultrasound image-based deep learning (DL) predictions, Ovarian-Adnexal Reporting and Data System scores from radiologists, and routine clinical variables. OvcaFinder outperforms the clinical model and the DL model, with areas under the curve (AUCs) of 0.978 and 0.947 in the internal and external test datasets, respectively. OvcaFinder assistance improved radiologists' AUCs and inter-reader agreement: the average AUCs increased from 0.927 to 0.977 and from 0.904 to 0.941, and the false positive rates decreased by 13.4% and 8.3% in the internal and external test datasets, respectively. This highlights the potential of OvcaFinder to improve the diagnostic accuracy and consistency of radiologists in identifying ovarian cancer.
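The AUC comparisons above can be made concrete with a small, self-contained sketch (not taken from the paper; `roc_auc` is an illustrative name): the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    is scored higher (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive is ranked below one negative, so AUC < 1
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

Comparing two models then amounts to computing this value for each model's scores on the same labels, as done for OvcaFinder versus the clinical and DL baselines.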


Subjects
Ovarian Neoplasms , Female , Humans , Ovarian Neoplasms/diagnostic imaging , Area Under Curve , Extremities , Radiologists , Retrospective Studies
2.
IEEE Trans Med Imaging ; PP, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38215335

ABSTRACT

Deep learning (DL)-based rib fracture detection has shown promise for preventing mortality and improving patient outcomes. Developing DL-based object detection models normally requires a huge number of bounding-box annotations. However, annotating medical data is time-consuming and demands expertise, making it impractical to obtain large amounts of fine-grained annotations. This poses a pressing need for label-efficient detection models that alleviate radiologists' labeling burden. To tackle this challenge, the object detection literature has seen an increase in weakly-supervised and semi-supervised approaches, yet it still lacks a unified framework that leverages various forms of fully-labeled, weakly-labeled, and unlabeled data. In this paper, we present a novel omni-supervised object detection network, ORF-Netv2, to leverage as much available supervision as possible. Specifically, a multi-branch omni-supervised detection head is introduced, with each branch trained on a specific type of supervision. A co-training-based dynamic label assignment strategy is then proposed to enable flexible and robust learning from weakly-labeled and unlabeled data. The proposed framework was extensively evaluated on three rib fracture datasets covering both chest CT and X-ray. By leveraging all forms of supervision, ORF-Netv2 achieves mAPs of 34.7, 44.7, and 19.4 on the three datasets, respectively, surpassing the baseline detector, which uses only box annotations, by mAP gains of 3.8, 4.8, and 5.0. Furthermore, ORF-Netv2 consistently outperforms other competitive label-efficient methods across various scenarios, showing promise as a framework for label-efficient fracture detection. The code is available at: https://github.com/zhizhongchai/ORF-Net/tree/main.
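The mAP figures above rest on deciding when a predicted box matches a ground-truth box, which is conventionally done with an intersection-over-union (IoU) threshold. A minimal sketch of that standard criterion (not code from ORF-Netv2; `iou` is an illustrative name):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes shifted by half a width overlap in a third of their union
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

A prediction typically counts as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5; precision-recall curves over all confidence thresholds then yield the AP per class.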

3.
IEEE Rev Biomed Eng ; PP, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38265911

ABSTRACT

Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and their applications to imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.

4.
Med Image Anal ; 86: 102772, 2023 05.
Article in English | MEDLINE | ID: mdl-36822050

ABSTRACT

Multi-label classification (MLC) attaches multiple labels to a single image and has achieved promising results on medical images. However, existing MLC methods still face challenging clinical realities in practical use, such as: (1) medical risks arising from misclassification, (2) sample imbalance among different diseases, and (3) inability to classify diseases that are not pre-defined (unseen diseases). Here, we design a hybrid label to improve the flexibility of MLC methods and alleviate the sample imbalance problem. Specifically, in the labeled training set, we retain independent labels for high-frequency diseases with enough samples and use a hybrid label to merge low-frequency diseases with fewer samples. The hybrid label can also accommodate unseen diseases in practical use. In this paper, we propose Triplet Attention and Dual-pool Contrastive Learning (TA-DCL) for multi-label medical image classification based on the aforementioned label representation. The TA-DCL architecture is a triplet attention network (TAN) that combines category-attention, self-attention, and cross-attention to learn high-quality label embeddings for all disease labels by mining effective information from medical images. DCL includes dual-pool contrastive training (DCT) and dual-pool contrastive inference (DCI). DCT optimizes the clustering centers of label embeddings belonging to different disease labels to improve the discrimination of label embeddings. DCI mitigates the misclassification of sick cases, reducing clinical risk and improving the ability to detect unseen diseases by contrasting differences. TA-DCL is validated on two public medical image datasets, ODIR and NIH-ChestXray14, showing superior performance to other state-of-the-art MLC methods. Code is available at https://github.com/ZhangYH0502/TA-DCL.
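The hybrid-label idea — keep independent labels for frequent diseases and fold rare ones into a single merged label — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `build_hybrid_labels` and the count threshold are invented for the example.

```python
from collections import Counter

def build_hybrid_labels(annotations, min_count):
    """Keep an independent label for each disease seen at least `min_count`
    times in the training set; merge all rarer diseases into one 'hybrid'
    label (which can also absorb unseen diseases at inference time).
    `annotations` is a list of per-image label lists."""
    counts = Counter(label for labels in annotations for label in labels)
    frequent = {label for label, c in counts.items() if c >= min_count}
    return [
        sorted({l if l in frequent else "hybrid" for l in labels})
        for labels in annotations
    ]

data = [["cataract"], ["cataract", "rare_a"], ["cataract"], ["rare_b"]]
print(build_hybrid_labels(data, min_count=2))
```

The rare classes `rare_a` and `rare_b` collapse into the shared `hybrid` label, which both rebalances the label distribution and leaves a slot for diseases never seen during training.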


Subjects
Image Processing, Computer-Assisted , Learning , Humans
5.
Radiol Artif Intell ; 4(5): e210299, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204545

ABSTRACT

Purpose: To evaluate the ability of fine-grained annotations to overcome shortcut learning in deep learning (DL)-based diagnosis using chest radiographs. Materials and Methods: Two DL models were developed using radiograph-level annotations (disease present: yes or no) and fine-grained lesion-level annotations (lesion bounding boxes), respectively named CheXNet and CheXDet. A total of 34 501 chest radiographs obtained from January 2005 to September 2019 were retrospectively collected and annotated regarding cardiomegaly, pleural effusion, mass, nodule, pneumonia, pneumothorax, tuberculosis, fracture, and aortic calcification. The internal classification performance and lesion localization performance of the models were compared on a testing set (n = 2922); external classification performance was compared on National Institutes of Health (NIH) Google (n = 4376) and PadChest (n = 24 536) datasets; and external lesion localization performance was compared on the NIH ChestX-ray14 dataset (n = 880). The models were also compared with radiologist performance on a subset of the internal testing set (n = 496). Performance was evaluated using receiver operating characteristic (ROC) curve analysis. Results: Given sufficient training data, both models performed similarly to radiologists. CheXDet achieved significant improvement for external classification, such as classifying fracture on NIH Google (CheXDet area under the ROC curve [AUC], 0.67; CheXNet AUC, 0.51; P < .001) and PadChest (CheXDet AUC, 0.78; CheXNet AUC, 0.55; P < .001). CheXDet achieved higher lesion detection performance than CheXNet for most abnormalities on all datasets, such as detecting pneumothorax on the internal set (CheXDet jackknife alternative free-response ROC [JAFROC] figure of merit [FOM], 0.87; CheXNet JAFROC FOM, 0.13; P < .001) and NIH ChestX-ray14 (CheXDet JAFROC FOM, 0.55; CheXNet JAFROC FOM, 0.04; P < .001). 
Conclusion: Fine-grained annotations overcame shortcut learning and enabled DL models to identify correct lesion patterns, improving the generalizability of the models. Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Localization. Supplemental material is available for this article. © RSNA, 2022.

6.
IEEE Trans Med Imaging ; 39(11): 3429-3440, 2020 11.
Article in English | MEDLINE | ID: mdl-32746096

ABSTRACT

Training deep neural networks usually requires a large amount of labeled data to obtain good performance. However, in medical image analysis, obtaining high-quality labels is laborious and expensive, as accurately annotating medical images demands the expertise of clinicians. In this paper, we present a novel relation-driven semi-supervised framework for medical image classification. It is a consistency-based method that exploits unlabeled data by encouraging prediction consistency for a given input under perturbations, and leverages a self-ensembling model to produce high-quality consistency targets for the unlabeled data. Considering that human diagnosis often refers to previous analogous cases to make reliable decisions, we introduce a novel sample relation consistency (SRC) paradigm to effectively exploit unlabeled data by modeling the relationship information among different samples. Unlike existing consistency-based methods, which simply enforce consistency of individual predictions, our framework explicitly enforces the consistency of semantic relations among different samples under perturbations, encouraging the model to explore extra semantic information from unlabeled data. We have conducted extensive experiments to evaluate our method on two public benchmark medical image classification datasets, i.e., skin lesion diagnosis with the ISIC 2018 challenge and thorax disease classification with ChestX-ray14. Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.
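One way to read the SRC paradigm is: compute a pairwise sample-relation matrix (e.g., a Gram matrix of features) for two perturbed views of the same unlabeled batch, and penalize the difference. A minimal sketch under that reading (not the authors' implementation; function names are invented):

```python
def relation_matrix(feats):
    """Pairwise inner products between sample feature vectors (a Gram
    matrix), capturing how samples in a batch relate to one another."""
    return [[sum(a * b for a, b in zip(f, g)) for g in feats] for f in feats]

def src_loss(feats_view1, feats_view2):
    """Mean squared difference between the relation matrices of two
    perturbed views of the same unlabeled batch."""
    r1, r2 = relation_matrix(feats_view1), relation_matrix(feats_view2)
    n = len(r1)
    return sum((r1[i][j] - r2[i][j]) ** 2
               for i in range(n) for j in range(n)) / n**2

# Identical views give zero loss; diverging relations give a positive loss
print(src_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))  # 0.0
```

Unlike per-sample prediction consistency, this loss is invariant to changes that preserve the *relations* between samples, which is the extra semantic structure the abstract describes.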


Subjects
Neural Networks, Computer , Supervised Machine Learning , Humans , Thorax
7.
IEEE Trans Med Imaging ; 39(11): 3583-3594, 2020 11.
Article in English | MEDLINE | ID: mdl-32746106

ABSTRACT

Deep learning approaches have demonstrated remarkable progress in automatic chest X-ray analysis. The data-driven nature of deep models requires training data that covers a large distribution, so it is essential to integrate knowledge from multiple datasets, especially for medical images. However, learning a disease classification model with extra chest X-ray (CXR) data remains challenging. Recent research has demonstrated that a performance bottleneck exists in joint training on different CXR datasets, yet few efforts have been made to address this obstacle. In this paper, we argue that incorporating an external CXR dataset leads to imperfect training data, which raises the challenges. Specifically, the imperfection is twofold: domain discrepancy, as image appearances vary across datasets; and label discrepancy, as different datasets are partially labeled. To this end, we formulate the multi-label thoracic disease classification problem as weighted independent binary tasks according to the categories. For common categories shared across domains, we adopt task-specific adversarial training to alleviate feature differences. For categories existing in a single dataset, we present uncertainty-aware temporal ensembling of model predictions to further mine information from the missing labels. In this way, our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability. We conduct extensive experiments on three datasets with more than 360,000 chest X-ray images. Our method outperforms other competing models and sets state-of-the-art performance on the official NIH test set with 0.8349 AUC, demonstrating the effectiveness of utilizing an external dataset to improve internal classification.
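Formulating the problem as independent binary tasks makes the label-discrepancy issue concrete: categories missing from a dataset can simply be masked out of the loss. A hedged sketch of such a masked binary cross-entropy (a generic device for partially labeled data, not the paper's adversarial or temporal-ensembling machinery; names are illustrative):

```python
import math

def masked_bce(preds, labels):
    """Binary cross-entropy over independent per-disease tasks, skipping
    entries labeled None (categories absent from a given dataset)."""
    terms = [
        -(y * math.log(p) + (1 - y) * math.log(1 - p))
        for p, y in zip(preds, labels)
        if y is not None
    ]
    return sum(terms) / len(terms)

# The third category is unlabeled in this dataset and contributes no loss
print(round(masked_bce([0.9, 0.2, 0.5], [1, 0, None]), 4))  # 0.1643
```

The paper goes further by generating surrogate targets for the masked entries via uncertainty-aware temporal ensembling, rather than discarding them entirely.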


Subjects
Deep Learning , Radiography , Radiography, Thoracic , Thorax/diagnostic imaging , X-Rays
8.
Med Image Anal ; 63: 101695, 2020 07.
Article in English | MEDLINE | ID: mdl-32442866

ABSTRACT

Glaucoma is the leading cause of irreversible blindness in the world. Structure and function assessments play an important role in diagnosing glaucoma. Nowadays, Optical Coherence Tomography (OCT) imaging has gained increasing popularity for measuring structural changes in the eye. However, few automated methods have been developed from OCT images to screen for glaucoma. In this paper, we are the first to unify structure analysis and function regression to effectively distinguish glaucoma patients from normal controls. Specifically, our method works in two steps: a semi-supervised learning strategy with a smoothness assumption is first applied for surrogate assignment of missing function regression labels. Subsequently, the proposed multi-task learning network explores the structure-function relationship between the OCT image and the visual field measurement simultaneously, which contributes to improved classification performance. It is also worth noting that the proposed method is assessed on two large-scale multi-center datasets. We first built the largest glaucoma OCT image dataset (the HK dataset), involving 975,400 B-scans from 4,877 volumes, to develop and evaluate the proposed method; the model, without further fine-tuning, was then directly applied to another independent dataset (the Stanford dataset) containing 246,200 B-scans from 1,231 volumes. Extensive experiments were conducted to assess the contribution of each component of our framework. The proposed method outperforms the baseline methods and two glaucoma experts by a large margin, achieving volume-level Areas Under the ROC Curve (AUC) of 0.977 on the HK dataset and 0.933 on the Stanford dataset. The experimental results indicate the great potential of the proposed approach for automated glaucoma diagnosis.


Subjects
Glaucoma , Tomography, Optical Coherence , Diagnostic Techniques, Ophthalmological , Glaucoma/diagnostic imaging , Humans , Supervised Machine Learning , Visual Fields
9.
IEEE J Biomed Health Inform ; 24(12): 3431-3442, 2020 12.
Article in English | MEDLINE | ID: mdl-32248132

ABSTRACT

Deep learning has achieved remarkable success in optical coherence tomography (OCT) image classification when substantial labelled B-scan images are available. However, obtaining such fine-grained expert annotations is usually difficult and expensive, so leveraging volume-level labels to develop a robust classifier is very appealing. In this paper, we propose a weakly supervised deep learning framework with uncertainty estimation to address macula-related disease classification from OCT images when only volume-level labels are available. First, a convolutional neural network (CNN)-based instance-level classifier is iteratively refined using the proposed uncertainty-driven deep multiple instance learning scheme. To the best of our knowledge, we are the first to incorporate an uncertainty evaluation mechanism into multiple instance learning (MIL) for training a robust instance classifier. The classifier is able to detect suspicious abnormal instances and simultaneously extract the corresponding deep embeddings with high representational capability. Second, a recurrent neural network (RNN) takes instance features from the same bag as input and generates the final bag-level prediction by considering both local instance information and the globally aggregated bag-level representation. For more comprehensive validation, we built two large diabetic macular edema (DME) OCT datasets from different devices and imaging protocols to evaluate the efficacy of our method, composed of 30,151 B-scans in 1,396 volumes from 274 patients (Heidelberg-DME dataset) and 38,976 B-scans in 3,248 volumes from 490 patients (Triton-DME dataset), respectively.
We compare the proposed method with state-of-the-art approaches and experimentally demonstrate its superiority, achieving volume-level accuracy, F1-score, and area under the receiver operating characteristic curve (AUC) of 95.1%, 0.939, and 0.990 on Heidelberg-DME, and 95.1%, 0.935, and 0.986 on Triton-DME, respectively. Furthermore, the proposed method also yields competitive results on another public age-related macular degeneration OCT dataset, indicating its high potential as an effective screening tool in clinical practice.
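The core MIL idea — pool instance (B-scan) predictions into a bag (volume) prediction while discounting uncertain instances — can be caricatured in a few lines. This is a deliberately simplified sketch, not the CNN+RNN pipeline of the paper; the function names and the entropy threshold are invented:

```python
import math

def entropy(p):
    """Predictive entropy of a binary probability, a simple uncertainty proxy."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def bag_prediction(instance_probs, max_entropy=0.6):
    """Max-pooled bag score over instances whose uncertainty is acceptable;
    falls back to all instances if everything is filtered out."""
    kept = [p for p in instance_probs if entropy(p) <= max_entropy]
    return max(kept or instance_probs)

# One confident abnormal B-scan drives the volume-level prediction, while
# the ambiguous 0.50 instance is discarded as too uncertain
print(bag_prediction([0.05, 0.10, 0.92, 0.50]))  # 0.92
```

The paper replaces the crude max-pooling here with an RNN that aggregates the retained instance embeddings, but the uncertainty-based filtering of instances is the mechanism the abstract highlights.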


Subjects
Deep Learning , Image Interpretation, Computer-Assisted/methods , Tomography, Optical Coherence/methods , Adolescent , Adult , Diabetic Retinopathy/diagnostic imaging , Humans , Macular Edema/diagnostic imaging , Retina/diagnostic imaging , Supervised Machine Learning , Young Adult
10.
J Magn Reson Imaging ; 50(4): 1144-1151, 2019 10.
Article in English | MEDLINE | ID: mdl-30924997

ABSTRACT

BACKGROUND: The usefulness of 3D deep learning-based classification of breast cancer and malignancy localization from MRI has been reported. This work can potentially be very useful in the clinical domain and aid radiologists in breast cancer diagnosis. PURPOSE: To evaluate the efficacy of a 3D deep convolutional neural network (CNN) for diagnosing breast cancer and localizing lesions on dynamic contrast-enhanced (DCE) MRI data in a weakly supervised manner. STUDY TYPE: Retrospective study. SUBJECTS: A total of 1537 female study cases (mean age 47.5 years ±11.8) were collected from March 2013 to December 2016. All cases had labels from pathology results as well as BI-RADS categories assessed by radiologists. FIELD STRENGTH/SEQUENCE: 1.5 T dynamic contrast-enhanced MRI. ASSESSMENT: Deep 3D densely connected networks were trained under image-level supervision to automatically classify the images and localize the lesions. The dataset was randomly divided into training (1073), validation (157), and testing (307) subsets. STATISTICAL TESTS: Accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve, and the McNemar test for breast cancer classification. Dice similarity for breast cancer localization. RESULTS: The final algorithm performance for breast cancer diagnosis showed 83.7% (257 out of 307) accuracy (95% confidence interval [CI]: 79.1%, 87.4%), 90.8% (187 out of 206) sensitivity (95% CI: 80.6%, 94.1%), and 69.3% (70 out of 101) specificity (95% CI: 59.7%, 77.5%), with an area under the ROC curve of 0.859. The weakly supervised cancer detection showed an overall Dice distance of 0.501 ± 0.274. DATA CONCLUSION: 3D CNNs demonstrated high accuracy for diagnosing breast cancer. The weakly supervised learning method showed promise for localizing lesions in volumetric radiology images with only image-level labels. LEVEL OF EVIDENCE: 4 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;50:1144-1151.
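The reported Dice score measures overlap between the predicted and ground-truth lesion masks. A minimal sketch of the standard definition (not the study's evaluation code; `dice` is an illustrative name):

```python
def dice(mask_a, mask_b):
    """Dice similarity between two binary masks (flattened to 0/1 lists):
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# The predicted lesion overlaps half of the ground-truth lesion
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

A score of 1.0 means perfect overlap and 0.0 means none; the study's mean of 0.501 therefore indicates the weakly supervised localizations cover roughly half of each lesion on average.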


Subjects
Breast Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Breast/diagnostic imaging , Contrast Media , Deep Learning , Female , Humans , Image Enhancement/methods , Middle Aged , Neural Networks, Computer , Retrospective Studies , Sensitivity and Specificity
11.
Lancet Digit Health ; 1(4): e172-e182, 2019 08.
Article in English | MEDLINE | ID: mdl-33323187

ABSTRACT

BACKGROUND: Spectral-domain optical coherence tomography (SDOCT) can be used to detect glaucomatous optic neuropathy, but human expertise in interpretation of SDOCT is limited. We aimed to develop and validate a three-dimensional (3D) deep-learning system using SDOCT volumes to detect glaucomatous optic neuropathy. METHODS: We retrospectively collected a dataset including 4877 SDOCT volumes of optic disc cube for training (60%), testing (20%), and primary validation (20%) from electronic medical and research records at the Chinese University of Hong Kong Eye Centre (Hong Kong, China) and the Hong Kong Eye Hospital (Hong Kong, China). A residual network was used to build the 3D deep-learning system. Three independent datasets (two from Hong Kong and one from Stanford, CA, USA), including 546, 267, and 1231 SDOCT volumes, respectively, were used for external validation of the deep-learning system. Volumes were labelled as having or not having glaucomatous optic neuropathy according to the criteria of retinal nerve fibre layer thinning on reliable SDOCT images with position-correlated visual field defect. Heatmaps were generated for qualitative assessments. FINDINGS: 6921 SDOCT volumes from 1 384 200 two-dimensional cross-sectional scans were studied. The 3D deep-learning system had an area under the receiver operating characteristic curve (AUROC) of 0·969 (95% CI 0·960-0·976), sensitivity of 89% (95% CI 83-93), specificity of 96% (92-99), and accuracy of 91% (89-93) in the primary validation, outperforming a two-dimensional deep-learning system that was trained on en face fundus images (AUROC 0·921 [0·905-0·937]; p<0·0001). The 3D deep-learning system performed similarly in the external validation datasets, with AUROCs of 0·893-0·897, sensitivities of 78-90%, specificities of 79-86%, and accuracies of 80-86%.
The heatmaps of glaucomatous optic neuropathy showed that the learned features by the 3D deep-learning system used for detection of glaucomatous optic neuropathy were similar to those used by clinicians. INTERPRETATION: The proposed 3D deep-learning system performed well in detection of glaucomatous optic neuropathy in both primary and external validations. Further prospective studies are needed to estimate the incremental cost-effectiveness of incorporation of an artificial intelligence-based model for glaucoma screening. FUNDING: Hong Kong Research Grants Council.


Subjects
Deep Learning , Glaucoma/diagnosis , Optic Nerve Diseases/diagnosis , Teaching , Tomography, Optical Coherence , Hong Kong , Humans
12.
Biometals ; 22(6): 941-9, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19421874

ABSTRACT

Magnesium-deficiency conditions applied to spinach cultures caused oxidative stress in spinach chloroplasts, as monitored by an increase in reactive oxygen species (ROS) accumulation. The increase in lipid peroxides in chloroplasts of spinach grown in magnesium-deficient media suggested an oxidative attack enabled by a weakened antioxidative defense mechanism, measured by analysing the activities of superoxide dismutase, catalase, ascorbate peroxidase, guaiacol peroxidase, and glutathione reductase, as well as antioxidants such as carotenoids and glutathione content. As the antioxidative response of the chloroplast was reduced in spinach grown in magnesium-deficient media, spinach plant weight was significantly reduced and old leaves became chlorotic. However, cerium treatment of spinach grown under magnesium-deficient conditions decreased malondialdehyde and ROS levels, increased the activities of the antioxidative defense system, and improved spinach growth. Together, this experimental study implies that cerium can partly substitute for magnesium and increase the oxidative-stress resistance of spinach chloroplasts grown under magnesium-deficient conditions, but the mechanisms need further study.


Subjects
Antioxidants/metabolism , Cerium/pharmacology , Chloroplasts/metabolism , Magnesium/pharmacology , Antioxidants/pharmacology , Ascorbate Peroxidases , Catalase/analysis , Catalase/metabolism , Cerium/metabolism , Glutathione/analysis , Glutathione/metabolism , Glutathione Reductase/analysis , Glutathione Reductase/metabolism , Lipid Peroxidation/drug effects , Magnesium/metabolism , Magnesium Deficiency/metabolism , Malondialdehyde/analysis , Oxidation-Reduction/drug effects , Oxidative Stress/drug effects , Peroxidase/analysis , Peroxidase/metabolism , Peroxidases/analysis , Peroxidases/metabolism , Plant Leaves/metabolism , Reactive Oxygen Species/metabolism , Spinacia oleracea , Superoxide Dismutase/analysis , Superoxide Dismutase/metabolism